6 research outputs found

    Evaluation of the Diagnostic Accuracy of a System for Computer-Aided Facial Phenotyping of Syndromic Patients

    In syndromology, computer-aided facial analysis of patients with facial dysmorphisms has become a significant tool in the diagnosis of genetic syndromic disorders. Through machine learning, software such as Face2Gene, Clinical Face Phenotype Space, and FaceBase is trained to detect dysmorphic facial features through automated image analysis. Based on the agreement between a person's image and the images of other affected individuals underlying the system, a list of differential diagnoses is presented. Given the high cost of genetic analysis and the rarity of individual genetic syndromes, automated image analysis can help clinicians shorten a diagnostic odyssey. However, Face2Gene is designed to assign a list of differential diagnoses to every image, so images of facially inconspicuous individuals cannot be identified as such. This study tests 1) Face2Gene's sensitivity, 2) whether Gestalt scores of syndromic faces differ significantly from those of inconspicuous faces, 3) how the suggested differential diagnoses are distributed within the healthy control cohort (the system's specificity), and 4) whether ethnic background or gender affects Face2Gene's diagnostic accuracy.
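The sensitivity notion used here, whether the correct syndrome appears among a system's top-k suggestions, can be sketched in a few lines; the diagnosis lists and case outcomes below are invented for illustration, not study data:

```python
def top_k_sensitivity(result_lists, true_diagnoses, k=10):
    """Fraction of cases whose true diagnosis appears in the top-k suggestions."""
    hits = sum(
        truth in suggestions[:k]
        for suggestions, truth in zip(result_lists, true_diagnoses)
    )
    return hits / len(true_diagnoses)

# Hypothetical example: ranked differential-diagnosis lists for two patients.
results = [
    ["Kabuki syndrome", "Noonan syndrome", "Williams-Beuren syndrome"],
    ["Cornelia de Lange syndrome", "Angelman syndrome"],
]
truths = ["Noonan syndrome", "Rett syndrome"]
print(top_k_sensitivity(results, truths, k=10))  # → 0.5
```

The first case is a top-10 hit, the second a miss, so the toy sensitivity is 0.5.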

    Efficiency of Computer-Aided Facial Phenotyping (DeepGestalt) in Individuals With and Without a Genetic Syndrome: Diagnostic Accuracy Study

    Background: Collectively, an estimated 5% of the population have a genetic disease. Many of them feature characteristics that can be detected by facial phenotyping. Face2Gene CLINIC is an online app for facial phenotyping of patients with genetic syndromes. DeepGestalt, the neural network driving Face2Gene, automatically prioritizes syndrome suggestions based on ordinary patient photographs, potentially improving the diagnostic process. Hitherto, studies on DeepGestalt’s quality highlighted its sensitivity in syndromic patients. However, determining the accuracy of a diagnostic methodology also requires testing of negative controls. Objective: The aim of this study was to evaluate DeepGestalt's accuracy with photos of individuals with and without a genetic syndrome. Moreover, we aimed to propose a machine learning–based framework for the automated differentiation of DeepGestalt’s output on such images. Methods: Frontal facial images of individuals with a diagnosis of a genetic syndrome (established clinically or molecularly) from a convenience sample were reanalyzed. Each photo was matched by age, sex, and ethnicity to a picture featuring an individual without a genetic syndrome. Absence of a facial gestalt suggestive of a genetic syndrome was determined by physicians working in medical genetics. Photos were selected from online reports or were taken by us for the purpose of this study. Facial phenotype was analyzed by DeepGestalt version 19.1.7, accessed via Face2Gene CLINIC. Furthermore, we designed linear support vector machines (SVMs) using Python 3.7 to automatically differentiate between the 2 classes of photographs based on DeepGestalt's result lists. Results: We included photos of 323 patients diagnosed with 17 different genetic syndromes and matched those with an equal number of facial images without a genetic syndrome, analyzing a total of 646 pictures. We confirm DeepGestalt’s high sensitivity (top 10 sensitivity: 295/323, 91%). 
DeepGestalt’s syndrome suggestions in individuals without a craniofacially dysmorphic syndrome followed a nonrandom distribution. A total of 17 syndromes appeared in the top 30 suggestions of more than 50% of nondysmorphic images. DeepGestalt’s top scores differed between the syndromic and control images (area under the receiver operating characteristic [AUROC] curve 0.72, 95% CI 0.68-0.76; P<.001). A linear SVM running on DeepGestalt’s result vectors showed stronger differences (AUROC 0.89, 95% CI 0.87-0.92; P<.001). Conclusions: DeepGestalt fairly separates images of individuals with and without a genetic syndrome. This separation can be significantly improved by SVMs running on top of DeepGestalt, thus supporting the diagnostic process of patients with a genetic syndrome. Our findings facilitate the critical interpretation of DeepGestalt’s results and may help enhance it and similar computer-aided facial phenotyping tools.
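The SVM step described in the abstract can be pictured as follows. This is a toy sketch, not the study's pipeline: the score vectors are synthetic, their length and class separation are invented, and a minimal from-scratch linear SVM (sub-gradient descent on the hinge loss) stands in for the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for DeepGestalt result vectors: each row holds the
# Gestalt scores of one image's ranked suggestions (dimensions assumed).
n, d = 323, 30
syndromic = rng.normal(0.6, 0.2, size=(n, d))
controls = rng.normal(0.4, 0.2, size=(n, d))
X = np.vstack([syndromic, controls])
y = np.array([1.0] * n + [-1.0] * n)  # +1 = syndromic, -1 = control

# Minimal linear SVM: sub-gradient descent on the L2-regularized hinge loss.
w, b = np.zeros(d), 0.0
lam, lr = 1e-3, 0.05
for _ in range(500):
    margins = y * (X @ w + b)
    active = margins < 1  # samples violating the margin
    grad_w, grad_b = lam * w, 0.0
    if active.any():
        grad_w -= (y[active, None] * X[active]).mean(axis=0)
        grad_b -= y[active].mean()
    w -= lr * grad_w
    b -= lr * grad_b

def auroc(scores, labels):
    """AUROC: probability that a random positive outscores a random negative."""
    pos, neg = scores[labels == 1], scores[labels == -1]
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

scores = X @ w + b
print(f"AUROC: {auroc(scores, y):.3f}")  # near 1.0 on this cleanly separated toy data
```

On real DeepGestalt output the classes overlap far more, which is why the reported AUROC is 0.89 rather than the near-perfect value this synthetic data yields.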

    PEDIA: prioritization of exome data by image analysis.

    PURPOSE: Phenotype information is crucial for the interpretation of genomic variants. So far it has only been accessible for bioinformatics workflows after encoding into clinical terms by expert dysmorphologists. METHODS: Here, we introduce an approach driven by artificial intelligence that uses portrait photographs for the interpretation of clinical exome data. We measured the value added by computer-assisted image analysis to the diagnostic yield on a cohort consisting of 679 individuals with 105 different monogenic disorders. For each case in the cohort we compiled frontal photos, clinical features, and the disease-causing variants, and simulated multiple exomes of different ethnic backgrounds. RESULTS: The additional use of similarity scores from computer-assisted analysis of frontal photos improved the top 1 accuracy rate by more than 20%, to 89%, and the top 10 accuracy rate by more than 5%, to 99%, for the disease-causing gene. CONCLUSION: Image analysis by deep-learning algorithms can be used to quantify the phenotypic similarity (PP4 criterion of the American College of Medical Genetics and Genomics guidelines) and to advance the performance of bioinformatics pipelines for exome analysis.
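The prioritization idea, boosting a candidate gene's rank in exome analysis with an image-derived similarity score, can be pictured with a toy combined score. The genes, score values, and simple additive weighting below are invented for illustration and are not the published PEDIA scoring formula:

```python
# Hypothetical per-gene scores: a variant-based pathogenicity score from the
# exome pipeline, and an image-based facial similarity (gestalt) score.
variant_scores = {"PIGV": 0.70, "ARID1B": 0.65, "KMT2D": 0.40}
gestalt_scores = {"PIGV": 0.90, "ARID1B": 0.20, "KMT2D": 0.30}

w = 0.5  # assumed weight of the image-based term (illustrative only)
combined = {
    gene: variant_scores[gene] + w * gestalt_scores.get(gene, 0.0)
    for gene in variant_scores
}
ranking = sorted(combined, key=combined.get, reverse=True)
print(ranking)  # → ['PIGV', 'ARID1B', 'KMT2D']
```

In this toy example the facial-similarity term lifts PIGV above ARID1B, mirroring how the photo-derived score can promote the disease-causing gene to the top of the list.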

    Characterization of glycosylphosphatidylinositol biosynthesis defects by clinical features, flow cytometry, and automated image analysis

    Background: Glycosylphosphatidylinositol biosynthesis defects (GPIBDs) cause a group of phenotypically overlapping recessive syndromes with intellectual disability, for which pathogenic mutations have been described in 16 genes of the corresponding molecular pathway. An elevated serum activity of alkaline phosphatase (AP), a GPI-linked enzyme, has been used to assign GPIBDs to the phenotypic series of hyperphosphatasia with mental retardation syndrome (HPMRS) and to distinguish them from another subset of GPIBDs, termed multiple congenital anomalies hypotonia seizures syndrome (MCAHS). However, the increasing number of individuals with a GPIBD shows that hyperphosphatasia is a variable feature that is not ideal for a clinical classification. Methods: We studied the discriminatory power of multiple GPI-linked substrates that were assessed by flow cytometry in blood cells and fibroblasts of 39 and 14 individuals with a GPIBD, respectively. On the phenotypic level, we evaluated the frequency of occurrence of clinical symptoms and analyzed the performance of computer-assisted image analysis of the facial gestalt in 91 individuals. Results: We found that certain malformations such as Hirschsprung disease and diaphragmatic defects are more likely to be associated with particular gene defects (PIGV, PGAP3, PIGN). However, especially at the severe end of the clinical spectrum of HPMRS, there is a high phenotypic overlap with MCAHS. Elevation of AP has also been documented in some of the individuals with MCAHS, namely those with PIGA mutations. Although the impairment of GPI-linked substrates is supposed to play the key role in the pathophysiology of GPIBDs, we could not observe gene-specific profiles for flow cytometric markers or a correlation between their cell surface levels and the severity of the phenotype. In contrast, it was facial recognition software that achieved the highest accuracy in predicting the disease-causing gene in a GPIBD. Conclusions: Due to the overlapping clinical spectrum of both HPMRS and MCAHS in the majority of affected individuals, the elevation of AP and the reduced surface levels of GPI-linked markers in both groups, a common classification as GPIBDs is recommended. The effectiveness of computer-assisted gestalt analysis for the correct gene inference in a GPIBD and probably beyond is remarkable and illustrates how the information contained in human faces is pivotal in the delineation of genetic entities.